Hi,
Quoting Bhagi rathi <jahnu77@xxxxxxxxx>:
> I hope your problem with xfs_repair is resolved.
It wasn't an XFS-specific problem after all. Running cfdisk on the
system would likewise fail, because the partition had been set beyond
the disk boundary. I'm not sure how this happened: either the
controller reported too large a size to Debian during installation, or
Debian misread the reported size. At any rate, redoing the entire
installation (including RAID setup) with the same procedure resulted
in the correct (i.e. smaller) disk boundaries, so the server is back
up with the new size.
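For anyone who runs into the same thing, a quick way to spot it (the
device name below is hypothetical) is to compare where each partition
ends against the size the kernel reports for the whole device:

  blockdev --getsz /dev/sda   # whole-device size in 512-byte sectors
  fdisk -l /dev/sda           # partition table, showing where each partition ends

If a partition runs past the end of the device, tools like cfdisk (and
xfs_repair on that partition) will fail, as happened here.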
> Please set your in-core log buffer size to a sub-multiple of your log
> size; that is, log size % in-core buffer size should be zero (% being
> the modulo operator). This is advisable, though not mandatory. A huge
> in-core buffer size is of no help if your on-disk log isn't big. This
> might solve your problem with repair and recovery after some reboots.
> Make sure that the total in-core buffer size is less than the on-disk
> log size.
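To put that in concrete terms, I read the rule as requiring that
logbsize divide the on-disk log size evenly and that logbufs x logbsize
stay below it; something like this, with made-up device and sizes:

  mkfs.xfs -l version=2,size=64m /dev/sdb1
  mount -t xfs -o logbufs=8,logbsize=256k /dev/sdb1 /srv
  # 8 x 256 KiB = 2 MiB of in-core buffers, and 64 MiB % 256 KiB = 0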
Previously, we were using version=1 logs with size=32768b, mounted
with logbufs=8,logbsize=32768. Now, we are using version=2 logs with
size=32768b, mounted with logbufs=8,logbsize=256k.
If I understand your advice correctly, I should either not mount with
logbsize > 32k, or I should create the filesystem using version=2 logs
with size=256k. Is this understanding correct?
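For reference, the numbers I am working from (assuming size=32768b
means 32768 filesystem blocks of 4 KiB, i.e. a 128 MiB log):

  8 x 256 KiB = 2 MiB of total in-core log buffers
  128 MiB (32768 x 4 KiB blocks) % 256 KiB = 0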
Are there general optimization suggestions for I/O-intensive servers,
as far as on-disk log and in-memory log buffer sizes are concerned?
Please advise.
Thank you very much.
--
Federico Sevilla III
F S 3 Consulting Inc.
http://www.fs3.ph