
To: lkcd@xxxxxxxxxxx
Subject: large crash dumps
From: Larry Cohen <Larry.Cohen@xxxxxxxxxxxx>
Date: Mon, 19 Mar 2001 10:11:18 -0500
Sender: owner-lkcd@xxxxxxxxxxx
Just trying to get a final answer on reducing the size of crash dumps
(my 250-megabyte dumps fill up a disk really fast).
I think I have read two conflicting messages about this.  The FAQ says
that the entire memory has to be dumped, but in the archive I see a
message from Dave Winchell saying that Mission Critical Linux's mcore
can selectively dump pages, which dramatically reduces dump sizes.
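
I don't know how mcore actually decides which pages to keep, but I
imagine the heart of it is a per-page filter, something along these
lines (purely illustrative, not mcore's code; page_worth_dumping is a
name I made up, and real selection would presumably also look at page
ownership, free lists, and so on):

static int page_worth_dumping(const unsigned char *page, unsigned long len)
{
        unsigned long i;

        /* keep the page only if it holds any non-zero data */
        for (i = 0; i < len; i++)
                if (page[i] != 0)
                        return 1;       /* has content, dump it */
        return 0;                       /* all zeroes, skip it */
}
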
I also noticed that I could further reduce the LKCD dumps with gzip
(by 50%).  Would it at least be possible to improve the compression by
using zlib?
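
For what it's worth, here is roughly what per-page zlib compression
looks like in user space (this uses plain zlib's compress2() at its
highest level; compress_page and the buffer handling are just a sketch
of mine, and doing this at dump time would mean using zlib from inside
the kernel dump path):

#include <zlib.h>

/* Sketch: compress one dump page with zlib at its highest level.
 * On success returns 0 and stores the compressed size in *out_len. */
int compress_page(const unsigned char *page, unsigned long page_len,
                  unsigned char *out, unsigned long *out_len)
{
        uLongf dest_len = page_len + page_len / 1000 + 12;  /* zlib worst case */

        if (dest_len > *out_len)
                return -1;              /* caller's output buffer too small */
        if (compress2(out, &dest_len, page, page_len,
                      Z_BEST_COMPRESSION) != Z_OK)
                return -1;
        *out_len = dest_len;
        return 0;
}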

It's been a challenge, but with the hardware I have I could not get
mcore working.
In order to get LKCD working I had to comment out code in
arch/i386/kernel/apic.c:

void disable_local_APIC(void)
{
        unsigned long value;

        clear_local_APIC();

        /*
         * Disable APIC (implies clearing of registers
         * for 82489DX!).
         */
#ifdef notdef   /* disabled: with this write in, the box hangs writing the dump header */
        value = apic_read(APIC_SPIV);
        value &= ~(1<<8);
        apic_write_around(APIC_SPIV, value);
#endif
}


I'm not really sure why this works or what the side effects are, but
if I don't do it the system will hang trying to write out the dump
header.

-Larry Cohen

