I am backing out an optimization: this code appears to make raid5
misbehave and fill the log with garbage. This caused recovery to fail,
and sometimes a clean mount to fail. I am unable to reproduce these
problems once the code is removed. There will be a performance impact
from this, but I am not sure how much. Hopefully, in the long term, we
can find out what is going wrong at the md layer and put this
optimization back in.
Steve
p.s. For those of you not following the recent discussion: the best
performance for raid5 is with an external log on a non-raid5 device,
using raid1 if you want to maintain a backup copy of the log data.
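As a rough sketch of that layout (the device names and log size below
are only placeholders, assuming the data is on a raid5 md device and
the log on a raid1 md device; adjust for your own setup):

    # make the filesystem with an external log on the raid1 device
    mkfs.xfs -l logdev=/dev/md1,size=32m /dev/md0

    # the same logdev option has to be given at mount time
    mount -t xfs -o logdev=/dev/md1 /dev/md0 /mnt/xfs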
Date: Mon Jul 23 15:54:45 PDT 2001
Workarea: jen.americas.sgi.com:/src/lord/xfs-linux.2.4
The following file(s) were checked into:
bonnie.engr.sgi.com:/isms/slinx/2.4.x-xfs
Modid: 2.4.x-xfs:slinx:99431a
linux/fs/pagebuf/page_buf.c - 1.95
http://gibble.americas.sgi.com/cgi-bin/cvsweb.cgi/slinx_2.4.x-xfs-nodel/linux/fs/pagebuf/page_buf.c.diff?r1=text&tr1=1.95&r2=text&tr2=1.94&f=h
http://oss.sgi.com/cgi-bin/cvsweb.cgi/linux-2.4-xfs/linux/fs/pagebuf/page_buf.c.diff?r1=text&tr1=1.95&r2=text&tr2=1.94&f=h
- Backing out the change which made pagebuf do larger I/O on md devices
  in some cases. This will increase the cpu overhead in those cases, but
  fixes logging on these devices, which was broken.