On Thu, 2011-06-30 at 16:16 -0400, Christoph Hellwig wrote:
> Add the gcc printf-like attribute to the xfs_repair-internal logging helpers,
> and fix the massive fallout. A large part of it is dealing with the correct
> format for fixed-size 64-bit types, but there were a lot of real bugs in there,
> including some that led to crashes when repairing certain corrupted
> filesystems on ARM based systems.
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> Reported-by: Anisse Astier <anisse@xxxxxxxxx>
Eric did the first hunk of this patch in March of 2010,
although he also added __attribute__((noreturn)) to
the abort and error functions. Thank you for completing
the rest of it...
In any case, I have built xfsprogs using this in
x86_64 and i386 environments and get no warnings
due to printf compatibility issues, so I'm fine
with your patch.
However when I build on ia64 I still get a bunch of
warnings. Some of them are just additional fallout
from adding the printf attribute. Others are similar
problems found in actual printf() calls.
I will create a patch on top of yours and will
send it to you. I think it will be best to
just merge the two together, but I'll leave it
up to you to do as you see fit.
Below I describe why we get some of these warnings,
and how using the PRI* format specifiers isn't
always the right fix.
This whole thing is complicated by inconsistent use
of types for defining 64-bit values--or really, by
the way the Linux kernel is inconsistent.
In xfsprogs, an xfs_ino_t is __uint64_t, which is
u_int64_t, which is defined via <sys/types.h> as:
i386: typedef unsigned long long int u_int64_t;
x86_64: typedef unsigned long int u_int64_t;
ia64: typedef unsigned long int u_int64_t;
The various format specifier macros are defined
in <inttypes.h> based on __PRI64_PREFIX, which is:
i386: #define __PRI64_PREFIX "ll"
x86_64: #define __PRI64_PREFIX "l"
ia64: #define __PRI64_PREFIX "l"
So in user space, any inode number defined using
xfs_ino_t (and more generally, anything defined
using u_int64_t) can be formatted using (for
example) PRIu64 and it comes out fine.
In the Linux kernel xfs has xfs_ino_t defined as
__u64, which is defined via <linux/types.h> as:
typedef __u64 u_int64_t;
and __u64 is then defined via inclusion of
<asm/types.h>. For ia64, <asm/types.h> includes
<asm-generic/int-l64.h>, which defines __u64 this
way:
typedef unsigned long __u64;
But for x86_64, <asm/types.h> includes
<asm-generic/int-ll64.h>, meaning __u64 is:
typedef unsigned long long __u64;
(My i386 ends up with the same definition as
x86_64, but through a slightly different route.)
Anyway, in the kernel, the specific type defined
for an xfs_ino_t (because it's based on __u64)
is either unsigned long or unsigned long long.
Which one you get depends on the architecture.
On native 64-bit systems both are likely to be
the same size, but whether long or long long is
used is still architecture dependent.
Also note that all the byte swap macros (like
be64_to_cpu()) are defined using the __u64 type.
Now __u64 is also a type visible in user
space, and it's one we make use of. But
because of the inconsistency described above,
if you use __u64 to define a 64-bit type,
you can't use things like PRIu64 to determine
the right format specifier to use, because
it's not predictable whether it will match
the underlying type. (On ia64 it works, but
on x86_64 it will not.) And of course you
can't just use %llu either.
So whenever you want to format something
defined as __u64 (or the result of a byte
swap macro), the best you can do is use %llu
in the format string, then cast the value
being printed to (unsigned long long).
Kind of a mess.
> (mp->m_sb.sb_rootino == NULLFSINO ? "(NULLFSINO) ":""),
> if (!no_modify)
> - _("resetting superblock root inode pointer to %lu\n"),
> + _("resetting superblock root inode pointer to %u\n"),
. . .