Re: multiple writes of same block
Thanks very much for this info. I am going to be tied up in meetings and
then sitting on extremely long plane trips for a couple more days. Hopefully
I can use this to make some progress next week. Could you possibly send
your raidtab file?
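For anyone following along, a minimal /etc/raidtab for a five-disk RAID-5
array on /dev/md2 (the old raidtools format) would look roughly like the
sketch below. The device names, chunk size, and parity algorithm are
placeholders; the actual file will differ:

```
raiddev /dev/md2
        raid-level              5
        nr-raid-disks           5
        nr-spare-disks          0
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              64
        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1
        device                  /dev/sdc1
        raid-disk               2
        device                  /dev/sdd1
        raid-disk               3
        device                  /dev/sde1
        raid-disk               4
```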
My first observation, though, is that in terms of where they are in the
filesystem itself, these requests are nowhere near each other and are
not going to cause a problem.
Steve
> Steve,
>
> Doing some more rigorous tests, I find that the 'multiple x requests for
> sector y' error can come about quite often with the raid-5 driver. In
> about 70 minutes, the message was given 40 times. Here is my
> environment and the type of test being run...
>
> Linux 2.4.9, xfs snapshot 26-Aug-01
> 64kb memory, PPC platform,
> 5 disk raid 5 (1 disk is FibreChannel array using Qlogicfc driver, 4
> scsi disks using sym53c8xx driver)
> The XFS filesystem was created using 'xfs -b 4096 /dev/md2'; its size is about 70GB.
>
> Test observations...
> Running 10 processes that do a random mix of file I/O (read, write, append,
> open, close, create, delete). HOWEVER, the first phase of the test,
> which populates the disk with files, is all create and write operations.
> Each process works in its own subdirectory, creating a mix of file
> sizes from 2K to 10MB. This create phase produces a lot of the errors.
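The test itself cannot be released, but a rough single-process sketch of that
kind of random-mix file exerciser (all names, ratios, and sizes here are
invented for illustration, not taken from the actual test) might look like:

```python
import os
import random
import tempfile

def stress(workdir, steps=200, max_size=64 * 1024, seed=0):
    """Randomly create, append to, read, and delete files under workdir.

    A stand-in for the kind of mixed-operation workload described above;
    the real test's operation mix and file sizes are not known.
    """
    rng = random.Random(seed)
    files = []
    next_id = 0
    for _ in range(steps):
        op = rng.choice(["create", "append", "read", "delete"])
        if op == "create" or not files:
            # Unique name per file so deletes never race with re-creates.
            path = os.path.join(workdir, "f%06d" % next_id)
            next_id += 1
            with open(path, "wb") as f:          # create + initial write
                f.write(os.urandom(rng.randrange(1, max_size)))
            files.append(path)
        elif op == "append":
            with open(rng.choice(files), "ab") as f:
                f.write(os.urandom(rng.randrange(1, 4096)))
        elif op == "read":
            with open(rng.choice(files), "rb") as f:
                f.read()
        else:                                    # delete
            os.remove(files.pop(rng.randrange(len(files))))
    return len(files)  # number of files left on disk

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print("files remaining:", stress(d))
```

The real test runs 10 such loops concurrently, each in its own subdirectory,
which is what drives the parallel write traffic into the raid5 layer.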
>
> <>< Lance.
>
>
> D. Lance Robinson wrote:
>
> > I added the suggested printk statement to the raid5 driver and got a
> > few instances of the warning/error while running a test. The test
> > consists of five processes doing a mix of access patterns (reads,
> > writes, appends, opens, closes, and deletes), each to its own set of
> > files and directories. Each file is accessed by only one process. The 5 errors
> > came at random times within a 7 hour test period using the 2.4.9
> > kernel and the 17-Aug-01 xfs snapshot. This is run on a PPC.
> >
> > Let me know if you have any patches to test or if any other info would
> > be helpful. Unfortunately, I cannot release the test being used.
>
> raid5: multiple 1 requests for sector 12947328
> raid5: new bh at blk 0x62c7f0 len 0x1000, existing blk 0x3163f80 len 0x1000
> raid5: multiple 1 requests for sector 12948352
> raid5: new bh at blk 0x62c9f0 len 0x1000, existing blk 0x3164f80 len 0x1000
> raid5: multiple 1 requests for sector 12949248
> raid5: new bh at blk 0x62cb80 len 0x1000, existing blk 0x3165c00 len 0x1000
> raid5: multiple 1 requests for sector 12949888
> raid5: new bh at blk 0x3166780 len 0x1000, existing blk 0x62ccf0 len 0x1000
> raid5: multiple 1 requests for sector 4697184
> raid5: new bh at blk 0x23d60c len 0x1000, existing blk 0x11eb060 len 0x1000
> raid5: multiple 1 requests for sector 19149312
> raid5: new bh at blk 0x490c980 len 0x1000, existing blk 0x921930 len 0x1000
> raid5: multiple 1 requests for sector 19150464
> raid5: new bh at blk 0x921b50 len 0x1000, existing blk 0x490da80 len 0x1000
> raid5: multiple 1 requests for sector 17107808
> raid5: new bh at blk 0x82859c len 0x1000, existing blk 0x4142ce0 len 0x1000
> raid5: multiple 1 requests for sector 19156224
> raid5: new bh at blk 0x922680 len 0x1000, existing blk 0x4913400 len 0x1000
> raid5: multiple 1 requests for sector 8853376
> raid5: new bh at blk 0x438bf0 len 0x1000, existing blk 0x21c5f80 len 0x1000
> raid5: multiple 1 requests for sector 10771712
> raid5: new bh at blk 0x522ea0 len 0x1000, existing blk 0x2917500 len 0x1000
> raid5: multiple 1 requests for sector 14839424
> raid5: new bh at blk 0x713750 len 0x1000, existing blk 0x389ba80 len 0x1000
> raid5: multiple 1 requests for sector 13024128
> raid5: new bh at blk 0x635dc0 len 0x1000, existing blk 0x31aee00 len 0x1000
> raid5: multiple 1 requests for sector 4734848
> raid5: new bh at blk 0x241ff0 len 0x1000, existing blk 0x120ff80 len 0x1000
> raid5: multiple 1 requests for sector 19194240
> raid5: new bh at blk 0x9270e0 len 0x1000, existing blk 0x4938700 len 0x1000
> raid5: multiple 1 requests for sector 21320064
> raid5: new bh at blk 0xa2a8f0 len 0x1000, existing blk 0x5154780 len 0x1000
> raid5: multiple 1 requests for sector 8871912
> raid5: new bh at blk 0x43afed len 0x1000, existing blk 0x21d7f68 len 0x1000
> raid5: multiple 1 requests for sector 10790248
> raid5: new bh at blk 0x52529d len 0x1000, existing blk 0x29294e8 len 0x1000
> raid5: multiple 1 requests for sector 21338600
> raid5: new bh at blk 0xa2cced len 0x1000, existing blk 0x5166768 len 0x1000
> raid5: multiple 1 requests for sector 10818944
> raid5: new bh at blk 0x2945700 len 0x1000, existing blk 0x528ae0 len 0x1000
> raid5: multiple 1 requests for sector 10825088
> raid5: new bh at blk 0x5296f0 len 0x1000, existing blk 0x294b780 len 0x1000
> raid5: multiple 1 requests for sector 10825216
> raid5: new bh at blk 0x529730 len 0x1000, existing blk 0x294b980 len 0x1000
> raid5: multiple 1 requests for sector 10825984
> raid5: new bh at blk 0x294c580 len 0x1000, existing blk 0x5298b0 len 0x1000
> raid5: multiple 1 requests for sector 10827648
> raid5: new bh at blk 0x529bc0 len 0x1000, existing blk 0x294de00 len 0x1000
> raid5: multiple 1 requests for sector 10828800
> raid5: new bh at blk 0x529e30 len 0x1000, existing blk 0x294f180 len 0x1000
> raid5: multiple 1 requests for sector 10829184
> raid5: new bh at blk 0x529ec0 len 0x1000, existing blk 0x294f600 len 0x1000
> raid5: multiple 1 requests for sector 10832128
> raid5: new bh at blk 0x52a480 len 0x1000, existing blk 0x2952400 len 0x1000
> raid5: multiple 1 requests for sector 10833152
> raid5: new bh at blk 0x2953500 len 0x1000, existing blk 0x52a6a0 len 0x1000
> raid5: multiple 1 requests for sector 10833536
> raid5: new bh at blk 0x52a740 len 0x1000, existing blk 0x2953a00 len 0x1000
> raid5: multiple 1 requests for sector 10834432
> raid5: new bh at blk 0x52a930 len 0x1000, existing blk 0x2954980 len 0x1000
> raid5: multiple 1 requests for sector 19091584
> raid5: new bh at blk 0x91a840 len 0x1000, existing blk 0x48d4200 len 0x1000
> raid5: multiple 1 requests for sector 21256832
> raid5: new bh at blk 0xa22d50 len 0x1000, existing blk 0x5116a80 len 0x1000
> raid5: multiple 1 requests for sector 12925184
> raid5: new bh at blk 0x629c90 len 0x1000, existing blk 0x314e480 len 0x1000
> raid5: multiple 1 requests for sector 12940160
> raid5: new bh at blk 0x62b9e0 len 0x1000, existing blk 0x315cf00 len 0x1000
> raid5: multiple 1 requests for sector 10813056
> raid5: new bh at blk 0x527f40 len 0x1000, existing blk 0x293fa00 len 0x1000
> raid5: multiple 1 requests for sector 33782144
> raid5: new bh at blk 0x101bcc0 len 0x1000, existing blk 0x80de600 len 0x1000
> raid5: multiple 1 requests for sector 19097344
> raid5: new bh at blk 0x91b380 len 0x1000, existing blk 0x48d9c00 len 0x1000
> raid5: multiple 1 requests for sector 10709120
> raid5: new bh at blk 0x51b440 len 0x1000, existing blk 0x28da200 len 0x1000
> raid5: multiple 1 requests for sector 6622464
> raid5: new bh at blk 0x328680 len 0x1000, existing blk 0x1943400 len 0x1000
> raid5: multiple 1 requests for sector 12921344
> raid5: new bh at blk 0x629510 len 0x1000, existing blk 0x314a880 len 0x1000
> raid5: multiple 1 requests for sector 10710144
> raid5: new bh at blk 0x51b660 len 0x1000, existing blk 0x28db300 len 0x1000
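As an aside, the duplicate-request messages above are easy to summarize with
a short script. This sketch (assuming only the exact message format shown in
the log) pulls out each reported sector number so the distribution can be
inspected, e.g. to check how far apart the affected sectors really are:

```python
import re

# Two of the log lines from above, as sample input.
LOG = """\
raid5: multiple 1 requests for sector 12947328
raid5: new bh at blk 0x62c7f0 len 0x1000, existing blk 0x3163f80 len 0x1000
raid5: multiple 1 requests for sector 12948352
raid5: new bh at blk 0x62c9f0 len 0x1000, existing blk 0x3164f80 len 0x1000
"""

def sectors(log):
    """Return the sector number from every 'multiple ... requests' line."""
    return [int(m.group(1))
            for m in re.finditer(r"multiple \d+ requests for sector (\d+)", log)]

print(sectors(LOG))  # → [12947328, 12948352]
```

Feeding the full dmesg output through this and sorting the result makes it
easy to see whether the duplicates cluster in a few regions or are spread
across the array.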