
Re: Tlb shutdown bit

To: ralf@xxxxxxxxxxx (Ralf Baechle)
Subject: Re: Tlb shutdown bit
From: kanoj@xxxxxxxxxxxxxxxxxxx (Kanoj Sarcar)
Date: Fri, 7 Apr 2000 16:16:22 -0700 (PDT)
Cc: sprasad@xxxxxxxxxxxxxxxxxxx (Srinivasa Prasad Thirumalachar), ralf@xxxxxxxxxxx (Ralf Baechle), linux-origin@xxxxxxxxxxx, sprasad@xxxxxxxxxxx, peltier@xxxxxxxxxxxxxxxxxxxx, mahdi@xxxxxxxxxxxxxxxxxxxx
In-reply-to: <20000407142859.C12447@uni-koblenz.de> from "Ralf Baechle" at Apr 07, 2000 02:28:59 PM
Sender: owner-linux-origin@xxxxxxxxxxx
> 
> On Fri, Apr 07, 2000 at 12:11:46PM -0700, Kanoj Sarcar wrote:
> 
> > For now, I am clearing the TS bit on entry into the kernel, and things
> > aren't too crazy ... except I keep on wondering what is making the 
> > TS get set, and whether it has any connection to me not being able
> > to talk to processors on other nodes (via prom routines).
> 
> At what time are you trying to launch the other processors?  The experience

Linux-kernel-wise, during smp_boot_cpus(). Yes, I was thinking of doing 
this earlier (just as a test), right after the master comes into the
kernel. This is _probably_ uninteresting to the cpu folks, so we can 
take this offline (on linux-origin). More interestingly, the slave cpu 
on the same node as the master does get launched; the slaves on
the other nodes fail. Srinivas said he is going to look into
debugging this problem in the prom (it might be a xkphys/ckseg0 issue
in the prom itself).

Kanoj

> I made over the last few years is that most firmware is rather fragile,
> so my own code for launching the CPUs launches them in the very early
> startup phase and leaves them waiting with interrupts disabled in a
> spinlock.  Later on the kernel actually launches the processors by
> unlocking these spinlocks.  This seems to have mostly solved the problems
> that I was observing with my own code.
> 
>   Ralf
> 

