On Sat, Feb 01, 2014 at 11:48:48AM +0800, Jeff Liu wrote:
> On 02/01 2014 00:28 AM, Mark Tinguely wrote:
> > On 01/31/14 09:51, Jeff Liu wrote:
> >> On 01/31 2014 23:30 PM, Eric Sandeen wrote:
> >>> On 1/31/14, 9:28 AM, Jeff Liu wrote:
> >>>> Well, when I was looking through our bitmap source, I wondered
> >>>> whether we could replace the current code with the generic bitmap
> >>>> library. However, our map is uint rather than unsigned long...
> >>>
> >>> Technically the unsigned long (pointer) is just the bitmap address,
> >>> I think.
> >>
> >> Yeah, so this might be worth trying in the long term.
> >
> > The blf_data_map[] is int aligned, not long aligned.
> > You could reflect the alignment difference in the offset or
> > change the alignment in the structure.
>
> For now, I don't think we can simply switch to the generic bitmap
> code because of the alignment difference on 64-bit systems.

The bitmaps end up on disk (in the log), so replacing them with a
generic implementation is something we need to be very careful
about.
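
To make that concrete, here's a rough sketch of the two shapes
involved (field names and sizes are approximations for illustration,
not copied from the tree):

/*
 * Rough sketch of the logged buffer format.  The dirty map is an
 * array of 32-bit words that is written to the log as-is, so both
 * its alignment and its word size are fixed by the log format.
 */
struct sketch_buf_log_format {
	unsigned short	blf_type;	/* log item type */
	unsigned short	blf_size;	/* size of this item */
	unsigned short	blf_flags;
	unsigned short	blf_len;	/* buffer length */
	long long	blf_blkno;	/* starting block number */
	unsigned int	blf_map_size;	/* dirty map size in 32-bit words */
	unsigned int	blf_data_map[16]; /* 1 bit per 128-byte chunk */
};

/*
 * The generic bitmap helpers in <linux/bitmap.h> operate on native
 * unsigned long words.  On a 64-bit kernel that means 8-byte words,
 * so something like
 *
 *	find_next_bit((unsigned long *)blf->blf_data_map, nbits, 0);
 *
 * is not safe: blf_data_map is only 4-byte aligned here, and on
 * big-endian machines bit 0 of a 64-bit word does not correspond to
 * bit 0 of the first 32-bit word in memory, so a logged map would be
 * misinterpreted after a simple cast.
 */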
IMO, we should be getting rid of the bitmaps from the
xfs_buf_log_item first (by moving to a low byte/high byte offset
range), then we only have to worry about bitmaps when doing log
recovery after a kernel upgrade on a filesystem with a dirty log.
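
A minimal sketch of what that could look like, with hypothetical
names (the struct and function names here are illustrative, not
existing code):

/*
 * Hypothetical range-based dirty tracking: instead of setting a bit
 * for every 128-byte chunk that is modified, the in-memory buf log
 * item just records the lowest and highest dirty byte offsets.
 * Assumes bli_dirty_first is initialised to the buffer length and
 * bli_dirty_last to 0 when the item is clean.
 */
struct sketch_buf_log_item_range {
	unsigned int	bli_dirty_first;	/* first dirty byte */
	unsigned int	bli_dirty_last;		/* last dirty byte */
};

static inline void
sketch_buf_item_log_range(
	struct sketch_buf_log_item_range	*bip,
	unsigned int				first,
	unsigned int				last)
{
	/* grow the single dirty range to cover the new modification */
	if (first < bip->bli_dirty_first)
		bip->bli_dirty_first = first;
	if (last > bip->bli_dirty_last)
		bip->bli_dirty_last = last;
}

Formatting the item then copies one contiguous region instead of
walking bit runs, and only log recovery of an old dirty log still
has to understand the bitmap format.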
Getting rid of the bitmaps also solves a scalability problem with
large block sizes, where tracking all the changes in a buffer burns
a huge amount of CPU walking bits when logging 64k directory buffers:
+ 21.19% [kernel] [k] xfs_dir3_leaf_check_int
+ 12.20% [kernel] [k] memcpy
+ 9.29% [kernel] [k] xfs_next_bit
+ 5.04% [kernel] [k] xfs_buf_offset
+ 3.63% [kernel] [k] xfs_buf_item_format
+ 3.59% [kernel] [k] xfs_buf_item_size_segment
The logging of xfs_buf_log_items there is consuming >30% of the CPU
being used under this workload (xfs_dir3_leaf_check_int() is high
because this is from a debug kernel).
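
For reference, the work those xfs_next_bit()/xfs_buf_item_size_segment()
samples represent looks roughly like the standalone sketch below
(simplified, not the actual XFS code): formatting a dirty buffer means
scanning the chunk bitmap for runs of set bits and emitting one copy
per run, so a 64k buffer with 128-byte chunks has 512 bits to walk on
every format call even when only a few chunks are dirty.

#include <stdio.h>

#define SKETCH_CHUNK_SIZE	128	/* bytes tracked per map bit */
#define SKETCH_NBWORD		(8 * sizeof(unsigned int))

static int
test_chunk_bit(const unsigned int *map, unsigned int bit)
{
	return (map[bit / SKETCH_NBWORD] >> (bit % SKETCH_NBWORD)) & 1;
}

static void
sketch_format_dirty_runs(const unsigned int *map, unsigned int nbits)
{
	unsigned int	bit = 0;

	while (bit < nbits) {
		unsigned int	start;

		/* skip clear bits to find the start of the next dirty run */
		while (bit < nbits && !test_chunk_bit(map, bit))
			bit++;
		if (bit >= nbits)
			break;
		start = bit;

		/* walk the set bits to find where the run ends */
		while (bit < nbits && test_chunk_bit(map, bit))
			bit++;

		/* one region copied into the log vector per dirty run */
		printf("copy bytes %u-%u\n", start * SKETCH_CHUNK_SIZE,
		       bit * SKETCH_CHUNK_SIZE - 1);
	}
}

int main(void)
{
	/* 64k buffer, 128 byte chunks: 512 bits, i.e. 16 uint words */
	unsigned int map[16] = { 0 };

	map[3] |= 1U << 5;	/* mark a single chunk dirty */
	sketch_format_dirty_runs(map, 512);
	return 0;
}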
IOWs, we should work to remove the bitmap code from general
operations first, then replace the remaining legacy log recovery
code with the generic bitmap implementation....
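
If the recovery-only code does get converted, the logged 32-bit map
words would still need expanding into a native unsigned long bitmap
before the generic helpers can walk them - something along the lines
of this hand-rolled, illustrative helper:

#include <linux/bitmap.h>
#include <linux/bitops.h>

/*
 * Illustrative only: expand logged 32-bit dirty map words into a
 * native unsigned long bitmap so that find_next_bit() and friends
 * can walk it during recovery.  dst must hold at least 32 * nwords
 * bits.
 */
static void
sketch_bitmap_from_u32(
	unsigned long		*dst,
	const unsigned int	*src,
	unsigned int		nwords)
{
	unsigned int	bit;

	bitmap_zero(dst, nwords * 32);
	for (bit = 0; bit < nwords * 32; bit++)
		if (src[bit / 32] & (1U << (bit % 32)))
			__set_bit(bit, dst);
}

The recovery loops could then call find_next_bit() and
find_next_zero_bit() on the converted map in place of the
xfs_next_bit() style helpers.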
Cheers,
Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx