On Thu, Jun 27, 2013 at 08:02:05AM -0500, Serge Hallyn wrote:
> Quoting Dave Chinner (david@xxxxxxxxxxxxx):
> > On Wed, Jun 26, 2013 at 05:30:17PM -0400, Dwight Engen wrote:
> > > On Wed, 26 Jun 2013 12:09:24 +1000
> > > Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > > > > We do need to decide on the di_uid that comes back from bulkstat.
> > > > > Right now it is returning on disk (== init_user_ns) uids. It looks
> > > > > to me like xfsrestore is using the normal vfs routines (chown,
> I may not be much help here (despite having used xfs for years,
> I've not used these features), but I'll try based on what I see in
> the man pages. Here is my understanding:
> Assume you're a task in a child userns, where you have host uids
> 100000-110000 mapped to container uids 0-10000,
> 1. bulkstat is an xfs_ioctl command, right? It should return the mapped
> uids (0-10000).
> 2. xfsdump should store the uids as seen in the caller's namespace. If
> xfsdump is done from the container, the dump should show uids 0-10000.
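The mapping described above (host uids 100000-110000 seen as 0-10000 in the container) can be sketched as plain arithmetic. This is only an illustration of the kernel-style single-extent uid translation; map_down/map_up are hypothetical helper names, not kernel symbols.

```python
OVERFLOW_UID = 65534  # what an unmapped id appears as inside the ns

def map_down(host_uid, first_host=100000, first_ns=0, count=10000):
    """Translate a host (init_user_ns) uid into the container's view."""
    if first_host <= host_uid < first_host + count:
        return first_ns + (host_uid - first_host)
    return OVERFLOW_UID  # no mapping: shows up as the overflow uid

def map_up(ns_uid, first_host=100000, first_ns=0, count=10000):
    """Translate a container uid back to the on-disk/host uid."""
    if first_ns <= ns_uid < first_ns + count:
        return first_host + (ns_uid - first_ns)
    return None  # uid does not exist in this namespace

print(map_down(100005))  # -> 5
print(map_down(65534))   # -> 65534 (host uid outside the mapped range)
print(map_up(5))         # -> 100005
```

So point 1 amounts to bulkstat running its returned di_uid values through map_down, and point 2 to xfsdump recording the map_down result.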
So when run from within a namespace, it should filter and return
only inodes that match the uids/gids mapped into the namespace?
That can be done; it's just a rather inefficient use of bulkstat
(which is primarily there for efficiency reasons).
Here's a corner case. Say I download a tarball from somewhere that
has uids/gids inside it, and when I untar it it creates uids/gids
outside the namespace's mapped range of [0-10000]. What happens then?
What uids do we end up on disk, and how do we ensure that the
bulkstat filter still returns those inodes?
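To picture the filtering question: a namespace-aware bulkstat walk would have to drop (or overflow-map) any inode whose on-disk uid falls outside the mapped range. A minimal sketch, assuming the single 100000-109999 extent from above (visible_in_ns is an illustrative helper, not a real predicate in the bulkstat path):

```python
def visible_in_ns(disk_uid, first_host=100000, count=10000):
    """True if this on-disk uid maps into the namespace's range."""
    return first_host <= disk_uid < first_host + count

# On-disk uids: the last two fall outside the mapped extent, e.g.
# inodes created from outside the namespace or from a raw-uid archive.
disk_uids = [100000, 100005, 3000, 250000]
matching = [u for u in disk_uids if visible_in_ns(u)]
print(matching)  # [100000, 100005]
```

In practice a chown to an unmapped uid from inside the namespace should fail with EINVAL, so the unmapped inodes in the corner case would normally have to be created from outside the namespace.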
> 3. xfsrestore should be run from the desired namespace. If you did
> xfsdump from the host ns, you should then xfsrestore from the host ns.
> Then inside the container those uids (100000-110000) will be mapped
> to your uids (0-10000).
> 4. If you xfsdump in this container, then xfsrestore in another
> container where you have 200000-210000 mapped to 0-10000, the dump
> image will have uids 0-10000. The restored image will have container
> uids 0-10000, while on the underlying host media it will be uids
> 200000-210000.
> 5. If you xfsdump in this container then xfsrestore on the host, then
> the host uids 0-10000 will be used on the underlying media. The
> container would be unable to read these files, as the uids do not map
> into the container.
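The round trip in points 4 and 5 is just the down-map at dump time followed by an up-map through the restorer's namespace. A sketch under the same single-extent assumption (dump_uid/restore_uid are illustrative names, not xfsdump internals):

```python
def dump_uid(disk_uid, first_host, count=10000):
    """Uid as recorded in the dump image: the dumper's ns view."""
    assert first_host <= disk_uid < first_host + count
    return disk_uid - first_host

def restore_uid(image_uid, first_host, count=10000):
    """Uid written to disk on restore, via the restorer's ns mapping."""
    assert 0 <= image_uid < count
    return first_host + image_uid

# Dump in container A (100000-110000 -> 0-10000):
img = dump_uid(100005, first_host=100000)   # image records uid 5
# Restore in container B (200000-210000 -> 0-10000):
print(restore_uid(img, first_host=200000))  # -> 200005 on disk
# Restore on the host (identity mapping):
print(restore_uid(img, first_host=0))       # -> 5 on disk
```

The host-restore case is exactly why point 5 ends with uids the container cannot read: uid 5 on disk is outside container A's 100000-110000 extent.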
Yes, that follows from 1+2. We'll need some documentation in
the dump/restore man pages for this, and I'd suggest that the
namespace documentation/man pages get this sort of treatment, too.