Dave, Eric and all,
thanks a lot for the quick answers. This is definitely a version
issue I did not take into account at first. We ran into it when
slurm found small differences in the scratch space. All (diskless)
SL6.5 compute nodes have xfsprogs-3.1.1-10.el6.x86_64 installed,
but the xfs filesystems were originally created with older versions
on SL5. Recreating the xfs after disk problems then changed the
resulting geometry, as seen.
From the beginning the plan was to recreate the xfs at every boot.
For some reason this did not work on (all) the diskless machines,
so we dropped it in favour of a find+rm cleanup.
Now I am reconsidering a format at boot time.
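The boot-time reformat I am reconsidering would look roughly like
this - a sketch only; the device and mount point are assumptions
for illustration, and DRY_RUN defaults to on so nothing is wiped
by accident:

```shell
#!/bin/sh
# Hypothetical boot-time scratch reformat (replacing the find+rm cleanup).
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0
# on the real nodes to actually reformat.
SCRATCH_DEV=/dev/sda3   # assumption: scratch partition as in the fdisk listing
SCRATCH_MNT=/scratch    # assumption: site-specific mount point

run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# unmount a leftover filesystem from the previous boot, if any
if grep -q " $SCRATCH_MNT " /proc/mounts; then
    run umount "$SCRATCH_MNT"
fi

# pass the format options explicitly so the result does not depend on
# the defaults of whichever mkfs.xfs version a node happens to have
run mkfs.xfs -f -l version=2 -i attr=2 "$SCRATCH_DEV"
run mount "$SCRATCH_DEV" "$SCRATCH_MNT"
```

With the options spelled out, old and new xfsprogs should produce
the same log and attr format on every node.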
Cheers
Anton
On 09/25/2014 11:19 PM, Dave Chinner wrote:
On Thu, Sep 25, 2014 at 10:01:08PM +0200, Gamel Anton J. wrote:
Dear all,
servers with identical disk setup (HW RAID0 H310a):
Disk /dev/sda: 598.9 GB, 598879502336 bytes
/dev/sda1 1 49152 394813439+ 82 Linux swap / Solaris
/dev/sda2 49153 50176 8225280 83 Linux
/dev/sda3 6399 72809 533446357+ 44 Unknown
mkfs.xfs creates on 28 of them:
meta-data=/dev/sda3 isize=256 agcount=16, agsize=8335099 blks
= sectsz=512 attr=1, projid32bit=0
data = bsize=4096 blocks=133361584, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=32768, version=1
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
That's clearly an old version of mkfs - it's selected version 1 logs
and attr1 by default, and a log size of only 128MB. mkfs.xfs has
defaulted to v2 logs since 3.0.0 (2007).
but on four out of them:
meta-data=/dev/sda3 isize=256 agcount=4, agsize=33340398 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=133361589, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=65117, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Clearly much newer - attr2, log v2, log larger than 128MB...
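The differing fields can be pulled out of a saved mkfs.xfs or
xfs_info report for a quick diff between nodes - a throwaway
sketch, not an xfs tool:

```shell
# xfs_geometry: print the attr version and the log version/size from a
# saved mkfs.xfs or xfs_info report (file arguments, or stdin if none).
xfs_geometry() {
    awk '{
        for (i = 1; i <= NF; i++) {
            f = $i; sub(/,$/, "", f)              # drop trailing comma
            if (f ~ /^attr=/) print f             # attr format version
            if ($1 == "log" && f ~ /^(version=|blocks=)/)
                print "log_" f                    # log format and size
        }
    }' "$@"
}
```

On the first report above it prints attr=1, log_blocks=32768,
log_version=1; on the second, attr=2, log_blocks=65117,
log_version=2 - the mismatch at a glance.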
mkfs.xfs -V on each of the nodes will tell you that they are running
different versions of mkfs, I think.
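Checking that across a cluster could be scripted along these lines
(a sketch: it assumes passwordless ssh, and the transport command is
a parameter so it can be stubbed out for testing):

```shell
# Print each node's mkfs.xfs version; node names on stdin, one per line.
# $1 is the command used to reach a node (defaults to ssh).
check_mkfs_versions() {
    remote=${1:-ssh}
    while read -r node; do
        printf '%s: %s\n' "$node" "$("$remote" "$node" mkfs.xfs -V 2>/dev/null)"
    done
}
```

Any node whose line differs from the rest is running a different
xfsprogs and will produce different mkfs defaults.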
The only way it worked was to dump nodeA:/dev/sda3 to nodeB:/dev/sda3.
Is there an explanation? Maybe I missed something ... hints?
If you are building a new storage system, then I'd highly recommend
all the nodes run the same software and that software is the newest
possible release you can get....
Cheers,
Dave.
--
Best regards
Anton J. Gamel
HPC und GRID-Computing
Physikalisches Institut
Abteilung Professor Schumacher
c/o Rechenzentrum der Universität Freiburg
Arbeitsgruppe Dr. Winterer
Hermann-Herder-Straße 10
79104 Freiburg
Tel.: ++49 (0)761 203 -4670
--
There always remains a remainder - and a remainder of the remainder.