>
> > while (fields) {
> > - f = (xfs_sb_field_t)xfs_lowbit64((__uint64_t)fields);
> > + f = (xfs_sb_field_t)find_first_bit((unsigned long *)&fields,64);
>
> I don't think we should add the cast here but rather pass the fields
> variable as an unsigned long to start with.
I actually did this first, but ran into some issues I unfortunately can't
remember right now, so I reverted to the cast.
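For reference, the no-cast variant would probably have to treat the mask as a
proper bitmap anyway, since the superblock field mask is 64 bits wide and
unsigned long is only 32 bits on 32-bit kernels.  Roughly (just a sketch, not
part of this patch):

        unsigned long fields[BITS_TO_LONGS(64)];        /* hypothetical */
        int f;

        while (!bitmap_empty(fields, 64)) {
                f = find_first_bit(fields, 64);
                /* ... log/apply superblock field f ... */
                __clear_bit(f, fields);
        }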
>
> > @@ -1428,11 +1428,11 @@ xfs_mod_sb(xfs_trans_t *tp, __int64_t fi
> >
> > /* find modified range */
> >
> > - f = (xfs_sb_field_t)xfs_lowbit64((__uint64_t)fields);
> > + f = (xfs_sb_field_t)find_first_bit((unsigned long *)&fields, 64);
> > ASSERT((1LL << f) & XFS_SB_MOD_BITS);
> > first = xfs_sb_info[f].offset;
> >
> > - f = (xfs_sb_field_t)xfs_highbit64((__uint64_t)fields);
> > + f = (xfs_sb_field_t)fls64((__uint64_t)fields) - 1;
>
> Same here.
The casts here are actually not needed, but I was too lazy to remove them
(they don't hurt either).
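For anyone cross-checking the conversion, these are the semantics I'm relying
on (illustrative mask value, not from the patch):

        unsigned long mask = 0x20020UL;         /* bits 5 and 17 set */

        find_first_bit(&mask, BITS_PER_LONG);   /* ==  5, like xfs_lowbit64() */
        fls64(mask) - 1;                        /* == 17, like xfs_highbit64() */

fls64() is 1-based (0 for an empty mask), hence the -1, and find_first_bit()
returns its size argument when no bit is set where the old helpers returned
-1, so zero masks still have to be handled by the callers.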
>
> > +/* All callers check for 0 arguments already; so no -1 handling */
> > +static inline int xfs_rtlobit(unsigned long v)
> > +{
> > + return find_first_bit(&v, 32);
> > +}
> > +
> > +#define XFS_RTLOBIT(w) xfs_rtlobit(w)
>
> I think just a
>
> #define XFS_RTLOBIT(w) find_first_bit(&(w), 32)
>
> should be fine.
Nope -- not all callers pass a suitably aligned unsigned long, which is what
find_first_bit() requires.
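To make that concrete (illustrative caller, not from the patch; xfs_rtword_t
is a 32-bit type):

        xfs_rtword_t wdiff;             /* word from the realtime bitmap */
        ...
        bit = XFS_RTLOBIT(wdiff);

With the plain macro this expands to find_first_bit(&wdiff, 32), but
find_first_bit() takes an unsigned long *, so on 64-bit it reads a full
8-byte word from a 4-byte, possibly only 4-byte-aligned object.  The inline
wrapper copies the value into a real unsigned long first, which keeps the
access in bounds and properly aligned.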
> Or make it just an inline, but not both a macro and
> an inline.
That should probably be done as a separate patch, because there
are many more macros in the rt code that need this treatment.
-Andi