question about xfs_repair

Łukasz Korczyk korczyk.l at gmail.com
Tue Jul 5 05:43:10 CDT 2011


I have found this piece of code in xfs_repair.c:

        /*
         * Adjust libxfs cache sizes based on system memory,
         * filesystem size and inode count.
         *
         * We'll set the cache size based on 3/4s the memory minus
         * space used by the inode AVL tree and block usage map.
         *
         * Inode AVL tree space is approximately 4 bytes per inode,
         * block usage map is currently 1 byte for 2 blocks.
         *
         * We assume most blocks will be inode clusters.
         *
         * Calculations are done in kilobyte units.
         */

        if (!bhash_option_used || max_mem_specified) {
                unsigned long   mem_used;
                unsigned long   max_mem;
                struct rlimit   rlim;

                libxfs_icache_purge();
                libxfs_bcache_purge();
                cache_destroy(libxfs_icache);
                cache_destroy(libxfs_bcache);

                mem_used = (mp->m_sb.sb_icount >> (10 - 2)) +
                                        (mp->m_sb.sb_dblocks >> (10 + 1)) +
                                        50000;  /* rough estimate of 50MB overhead */
                max_mem = max_mem_specified ? max_mem_specified * 1024 :
                                                libxfs_physmem() * 3 / 4;

                if (getrlimit(RLIMIT_AS, &rlim) != -1 &&
                                        rlim.rlim_cur != RLIM_INFINITY) {
                        rlim.rlim_cur = rlim.rlim_max;
                        setrlimit(RLIMIT_AS, &rlim);
                        /* use approximately 80% of rlimit to avoid overrun */
                        max_mem = MIN(max_mem, rlim.rlim_cur / 1280);
                } else
                        max_mem = MIN(max_mem, (LONG_MAX >> 10) + 1);

                if (verbose > 1)
                        do_log(_("        - max_mem = %lu, icount = %llu, "
                                "imem = %llu, dblock = %llu, dmem =
%llu\n"),
                                max_mem, mp->m_sb.sb_icount,
                                mp->m_sb.sb_icount >> (10 - 2),
                                mp->m_sb.sb_dblocks,
                                mp->m_sb.sb_dblocks >> (10 + 1));

                if (max_mem <= mem_used) {
                        /*
                         * Turn off prefetch and minimise libxfs cache if
                         * physical memory is deemed insufficient
                         */
                        if (max_mem_specified) {
                                do_abort(
        _("Required memory for repair is greater that the maximum
specified\n"
          "with the -m option. Please increase it to at least %lu.\n"),
                                        mem_used / 1024);
                        } else {
                                do_warn(
        _("Not enough RAM available for repair to enable prefetching.\n"
          "This will be _slow_.\n"
          "You need at least %luMB RAM to run with prefetching enabled.\n"),
                                        mem_used * 1280 / (1024 * 1024));
                        }
                        do_prefetch = 0;
                        libxfs_bhash_size = 64;
                } else {
                        max_mem -= mem_used;
                        if (max_mem >= (1 << 30))
                                max_mem = 1 << 30;
                        libxfs_bhash_size = max_mem / (HASH_CACHE_RATIO *
                                        (mp->m_inode_cluster_size >> 10));
                        if (libxfs_bhash_size < 512)
                                libxfs_bhash_size = 512;
                }

                if (verbose)
                        do_log(_("        - block cache size set to %d
entries\n"),
                                libxfs_bhash_size * HASH_CACHE_RATIO);

                if (!ihash_option_used)
                        libxfs_ihash_size = libxfs_bhash_size;

                libxfs_icache = cache_init(libxfs_ihash_size,
                                                &libxfs_icache_operations);
                libxfs_bcache = cache_init(libxfs_bhash_size,
                                                &libxfs_bcache_operations);
        }
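
From the comment block alone, my best guess (and it is only a guess) is that
the minimum memory, in kilobytes, works out to roughly:

    mem_used = icount/256 + dblocks/2048 + 50000

i.e. about 4 bytes per inode, 1 byte per 2 filesystem blocks, plus a fixed
50MB of overhead; repair then aborts (with -m) or disables prefetch unless
max_mem exceeds that. The rlim.rlim_cur / 1280 looks like a bytes-to-KB
conversion (/1024) folded together with the 80% safety margin mentioned in
the comment (1024 * 1.25 = 1280). Here is a minimal C sketch of that
arithmetic; the example values are taken from Dave's reply quoted below,
not anything I measured myself:

/*
 * Sketch of my reading of the mem_used estimate above.
 * All values are in kilobytes, as in xfs_repair itself.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint64_t icount  = 50401792ULL;      /* sb_icount: allocated inodes */
        uint64_t dblocks = 4294967296ULL;    /* sb_dblocks: fs data blocks */

        uint64_t imem = icount >> (10 - 2);  /* ~4 bytes/inode -> KB */
        uint64_t dmem = dblocks >> (10 + 1); /* ~1 byte per 2 blocks -> KB */
        uint64_t mem_used = imem + dmem + 50000; /* + ~50MB overhead */

        /* minimum -m value (MB) that avoids the do_abort() above */
        printf("mem_used = %llu KB; minimum -m = %llu MB\n",
                (unsigned long long)mem_used,
                (unsigned long long)(mem_used / 1024));
        return 0;
}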

I lack the programming skills to analyze the code properly and derive a
formula that would let me predict the memory usage of xfs_repair, so the
guess above may well be wrong.
Can someone explain how it is actually calculated, please?

My goal is to be able to specify minimal memory requirements.
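
For what it is worth, plugging the numbers from Dave's reply below into
that guess reproduces his output exactly:

    imem       = 50401792 >> 8    =  196882 KB
    dmem       = 4294967296 >> 11 = 2097152 KB
    mem_used   = 196882 + 2097152 + 50000 = 2344034 KB
    minimum -m = 2344034 / 1024   = 2289 MB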

Cheers

Łukasz Korczyk




On 4 July 2011 at 14:13, Dave Chinner <david at fromorbit.com> wrote:

> On Mon, Jul 04, 2011 at 11:41:49AM +0200, Łukasz Korczyk wrote:
> > Hello
> >
> > I have a question I wasn't able to find answer for.
> >
> > Which factors influence memory usage of xfs_repair?
> > Does any formula exist to estimate possible memory usage?
>
> # xfs_repair -n -vv -m 1 /dev/vda
> Phase 1 - find and verify superblock...
>        - max_mem = 1024, icount = 64, imem = 0, dblock = 4294967296, dmem = 2097152
> Required memory for repair is greater that the maximum specified
> with the -m option. Please increase it to at least 2096.
>
> So it's telling me I need at least 2096MB of RAM to repair my 16TB
> filesystem, of which 2097152KB is needed for tracking free space...
>
> I just added 50 million inodes to the filesystem (it now has 50M +
> 10 inodes in it), and the result is:
>
> # xfs_repair -vv -m 1 /dev/vda
> Phase 1 - find and verify superblock...
>        - max_mem = 1024, icount = 50401792, imem = 196882, dblock = 4294967296, dmem = 2097152
> Required memory for repair is greater that the maximum specified
> with the -m option. Please increase it to at least 2289.
>
> That is, it now needs at least another 200MB of RAM to run.
>
> It is worth noting that these numbers are the absolute minimum
> required and repair may require more RAM than this to complete
> successfully. If you only give it this much RAM, it will be slow;
> for best repair performance, the more RAM you can give it the
> better.
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david at fromorbit.com
>

