Hi,
On 01/23/2015 09:06 PM, Brian Foster wrote:
> On Fri, Jan 23, 2015 at 08:46:59PM +0700, Dewangga Bachrul Alam wrote:
>> Hi,
>>
>> The device reported like this :
>>
>> $ blockdev --getss --getpbsz --getiomin --getioopt /dev/dm-3
>> 512
>> 4096
>> 4096
>> 0
>>
>
> What is dm-3? Is that a logical volume based on a volume group on top of
> your physical array? It might be good to get the equivalent data for the
> array device (md?) and at least one of the physical devices (sd?).
>
dm-3 is a logical volume, not my physical array. Here is my block device
layout:
--
NAME                                   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                      8:0    0  1.8T  0 disk
├─sda1                                   8:1    0  500M  0 part /boot
└─sda2                                   8:2    0  1.8T  0 part
  ├─vg_catalystdb01-lv_root (dm-0)     253:0    0    50G  0 lvm  /
  ├─vg_catalystdb01-lv_swap (dm-1)     253:1    0  31.5G  0 lvm  [SWAP]
  ├─vg_catalystdb01-lv_home (dm-2)     253:2    0  48.8G  0 lvm  /home
  ├─vg_catalystdb01-lv_database (dm-3) 253:3    0  97.7G  0 lvm  /var/lib/mysql
  ├─vg_catalystdb01-lv_tmp (dm-4)      253:4    0   3.9G  0 lvm  /tmp
  ├─vg_catalystdb01-lv_var (dm-5)      253:5    0  19.5G  0 lvm  /var
  ├─vg_catalystdb01-lv_audit (dm-6)    253:6    0   3.9G  0 lvm  /var/log/audit
  └─vg_catalystdb01-lv_log (dm-7)      253:7    0   7.8G  0 lvm  /var/log
--
The XFS filesystem is mounted only on vg_catalystdb01-lv_database (dm-3).
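For completeness, since dm-3 sits on sda2 (and ultimately sda) in the layout
above, the same query can be run against the underlying disk to confirm where
the 4096 physical sector size comes from; a minimal sketch, reusing the
blockdev invocation from earlier in the thread:

$ blockdev --getss --getpbsz --getiomin --getioopt /dev/sda
$ blockdev --getss --getpbsz --getiomin --getioopt /dev/sda2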
>> Then the sector size should be 512; I don't know why it is 4096. :( I
>> will try to back the data up and reformat. Any suggestions for formatting
>> on a RAID-10 array? The device is 4 x 1TB drives.
>>
>
> According to the above you have 4k physical sectors and 512b logical
> sectors. IIUC, this means mkfs will use the physical size by default,
> but you can specify a smaller size, and the device can handle sectors
> down to 512b.
>
> From skimming through the link posted earlier, it sounds like you have
> an application that has a hardcoded dependency on 512b direct I/O
> requirements (e.g., buffer alignment) rather than being configurable..?
> Can you disable direct I/O and verify whether that works? Regardless, it
> might be wise to test out this 512b sector configuration (perhaps with a
> single or spare drive?) and verify this fixes the problem you're trying
> to solve before reconfiguring everything.
>
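A quick way to test 512b direct I/O on a given filesystem is a small aligned
write with dd (a sketch; the target file name is hypothetical). It should
succeed when the filesystem sector size is 512 and typically fails with
EINVAL when it is 4096:

$ dd if=/dev/zero of=/var/lib/mysql/dio_test bs=512 count=1 oflag=direct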
If I disable direct I/O, the application works well.
I then reformatted the existing partition and re-enabled the direct I/O
parameter, and now it works. Here is the new result:
$ xfs_info /dev/dm-3
meta-data=/dev/mapper/vg_catalystdb01-lv_database isize=256 agcount=16, agsize=1600000 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=25600000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=12500, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
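For reference, the sector size can only be set at mkfs time; a minimal sketch
of the reformat that produces sectsz=512 (the -s size=512 option requests
512b sectors and -f overwrites the existing filesystem, so only run it after
backing the data up):

$ mkfs.xfs -f -s size=512 /dev/mapper/vg_catalystdb01-lv_database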
Thanks bro.
> Brian
>
>> On 01/23/2015 08:39 PM, Brian Foster wrote:
>>> On Fri, Jan 23, 2015 at 08:04:44PM +0700, Dewangga Bachrul Alam wrote:
>>>> Hi,
>>>>
>>>> I'm new to XFS. I have a RAID-10 array with 4 disks, and when I check
>>>> with xfs_info, the information prints like this.
>>>>
>>>> $ xfs_info /var/lib/mysql
>>>> meta-data=/dev/mapper/vg_catalystdb01-lv_database isize=256 agcount=16, agsize=1600000 blks
>>>>          =                       sectsz=4096  attr=2, projid32bit=0
>>>> data     =                       bsize=4096   blocks=25600000, imaxpct=25
>>>>          =                       sunit=0      swidth=0 blks
>>>> naming   =version 2              bsize=4096   ascii-ci=0
>>>> log      =internal               bsize=4096   blocks=12500, version=2
>>>>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
>>>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>>>>
>>>> Is it possible to change the `sectsz` value to 512 without reformatting?
>>>> Or any other suggestion? I have an issue with the current sector size;
>>>> my TokuDB engine[1] can't start because of it.
>>>>
>>>
>>> The only way to set things like sector size, block size, etc. is to
>>> reformat. I believe the default sector size is dependent on the physical
>>> device. You might want to report the following from your array device
>>> and perhaps from some or all of the member devices:
>>>
>>> blockdev --getss --getpbsz --getiomin --getioopt <device>
>>>
>>> Brian
>>>
>>>> [1] https://groups.google.com/forum/#!topic/tokudb-user/kvQFJLCmKwo
>>>>